    External optimal control of fractional parabolic PDEs

    In this paper we introduce a new notion of optimal control, or source identification in inverse problems, with fractional parabolic PDEs as constraints. This new notion allows a source/control placement outside the domain where the PDE is fulfilled. We tackle the Dirichlet, Neumann, and Robin cases. For fractional elliptic PDEs this has recently been investigated by the authors in \cite{HAntil_RKhatri_MWarma_2018a}. The need for these novel optimal control concepts stems from the fact that classical PDE models only allow placing the source/control either on the boundary or in the interior where the PDE is satisfied; the nonlocal behavior of the fractional operator, however, allows placing the control in the exterior. We introduce the notions of weak and very-weak solutions to the parabolic Dirichlet problem and present an approach for approximating the parabolic Dirichlet solutions by parabolic Robin solutions (with convergence rates). We give a complete analysis of the Dirichlet and Robin optimal control problems. The numerical examples confirm our theoretical findings and further illustrate the potential benefits of nonlocal models over local ones.
    Comment: arXiv admin note: text overlap with arXiv:1811.0451
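
    The exterior-control setup can be summarized schematically. The following is a sketch of the parabolic Dirichlet case, assuming the integral fractional Laplacian $(-\Delta)^s$, a target $u_d$, and a regularization weight $\alpha$; the cost functional and norms are indicative, not the paper's exact formulation.

        \min_{z}\ \frac{1}{2}\|u - u_d\|_{L^2(\Omega\times(0,T))}^{2}
                + \frac{\alpha}{2}\|z\|^{2}
        \quad\text{subject to}\quad
        \begin{cases}
          \partial_t u + (-\Delta)^s u = 0 & \text{in } \Omega\times(0,T),\\
          u = z & \text{in } (\mathbb{R}^N\setminus\Omega)\times(0,T),\\
          u(\cdot,0) = u_0 & \text{in } \Omega.
        \end{cases}

    The key point is that the control $z$ acts only in the exterior $\mathbb{R}^N\setminus\Omega$, which is exactly what the nonlocality of $(-\Delta)^s$ makes possible.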

    Fractional Deep Neural Network via Constrained Optimization

    This paper introduces a novel algorithmic framework for a deep neural network (DNN) which, in a mathematically rigorous manner, allows us to incorporate history (or memory) into the network: it ensures all layers are connected to one another. This DNN, called Fractional-DNN, can be viewed as a time-discretization of a fractional-in-time nonlinear ordinary differential equation (ODE). The learning problem is then a minimization problem subject to that fractional ODE as a constraint. We emphasize that the analogy between existing DNNs and ODEs with a standard time derivative is by now well known; the focus of our work is the Fractional-DNN. Using the Lagrangian approach, we derive the backward propagation and the design equations. We test our network on several datasets for classification problems. Fractional-DNN offers various advantages over existing DNNs; the key benefits are a significant mitigation of the vanishing-gradient issue due to the memory effect, and better handling of nonsmooth data due to the network's ability to approximate nonsmooth functions.
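
    To make the memory mechanism concrete, here is a minimal sketch of a fractional-in-time forward pass using the standard L1 discretization of a Caputo derivative of order gamma in (0,1); the layer map tanh(W u + b), the step size tau, and all names are illustrative, not the paper's implementation.

        import numpy as np
        from math import gamma as Gamma

        def fractional_forward(u0, weights, biases, gam=0.5, tau=1.0):
            """L1-scheme forward pass: each new state sees ALL previous states."""
            L = len(weights)
            # L1 coefficients a_j = (j+1)^(1-gam) - j^(1-gam), with a_0 = 1
            a = [(j + 1) ** (1 - gam) - j ** (1 - gam) for j in range(L)]
            c = tau ** gam * Gamma(2 - gam)   # scaling from the Caputo L1 scheme
            states = [u0]
            for k in range(L):
                f_k = np.tanh(weights[k] @ states[k] + biases[k])  # layer nonlinearity
                # memory term: weighted sum of all past increments
                hist = sum(a[j] * (states[k + 1 - j] - states[k - j])
                           for j in range(1, k + 1))
                states.append(states[k] - hist + c * f_k)
            return states[-1]

        # toy usage with random layer parameters
        rng = np.random.default_rng(0)
        Ws = [0.1 * rng.normal(size=(8, 8)) for _ in range(4)]
        bs = [np.zeros(8) for _ in range(4)]
        out = fractional_forward(rng.normal(size=8), Ws, bs)

    Taking gam close to 1 recovers the usual one-step (ResNet-like) update, while smaller gam weights the history more heavily.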

    Non-diffusive Variational Problems with Distributional and Weak Gradient Constraints

    In this paper, we consider non-diffusive variational problems with mixed boundary conditions and (distributional and weak) gradient constraints. The upper bound in the constraint is either a function or a Borel measure, leading to the state space being a Sobolev space or the space of functions of bounded variation. We address existence and uniqueness of the model under low regularity assumptions, and rigorously identify its Fenchel pre-dual problem. The latter, in some cases, is posed on a non-standard space of Borel measures with square-integrable divergences. We also establish existence and uniqueness of solutions to this pre-dual problem under some assumptions. We conclude the paper by introducing a mixed finite-element method to solve the primal-dual system. The numerical examples confirm our theoretical findings.
    Funding: NSF (DMS-1818772, DMS-2012391, DMS-1913004); Air Force Office of Scientific Research (AFOSR) (FA9550-19-1-0036); Department of Navy, funded by Naval Postgraduate School (N00244-20-1-000).
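
    Schematically, a model problem of this type can be written as below, where the "non-diffusive" energy contains no gradient term and the bound $\psi$ may be a function (Sobolev setting) or a measure (BV setting); the quadratic functional is an indicative example, not the paper's exact form.

        \min_{u \in K}\ \frac{1}{2}\int_\Omega u^2 \,dx - \int_\Omega f\,u \,dx,
        \qquad
        K = \{\, v : |\nabla v| \le \psi \ \text{weakly or distributionally} \,\}.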

    Learning Control Policies of Hodgkin-Huxley Neuronal Dynamics

    We present a neural network approach for closed-loop deep brain stimulation (DBS). We cast the problem of finding an optimal neurostimulation strategy as a control problem. In this setting, control policies aim to optimize therapeutic outcomes by tailoring the parameters of a DBS system, typically via electrical stimulation, in real time based on the patient's ongoing neuronal activity. We approximate the value function offline using a neural network, which enables generating controls (stimuli) in real time via the feedback form. The neuronal activity is characterized by a nonlinear, stiff system of differential equations, as dictated by the Hodgkin-Huxley model. Our training process leverages the relationship between Pontryagin's maximum principle and the Hamilton-Jacobi-Bellman equations to simultaneously update the value function estimates. Our numerical experiments illustrate the accuracy of our approach on out-of-distribution samples and its robustness to moderate shocks and disturbances in the system.
    Comment: Extended Abstract presented at the Machine Learning for Health (ML4H) symposium 2023, December 10th, 2023, New Orleans, United States, 12 pages
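
    To make the feedback form concrete, here is a minimal sketch of how a trained value network can generate stimuli in real time, assuming dynamics dx/dt = f(x) + B u and a quadratic control penalty (beta/2)|u|^2; the network, beta, and B below are illustrative placeholders, not the paper's trained model.

        import torch

        beta = 0.1                                # assumed control-penalty weight
        dim_x, dim_u = 4, 1                       # HH state (V, m, h, n); scalar stimulus
        B = torch.zeros(dim_x, dim_u)
        B[0, 0] = 1.0                             # stimulus enters the voltage equation

        value_net = torch.nn.Sequential(          # V_theta(t, x), trained offline
            torch.nn.Linear(1 + dim_x, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1),
        )

        def feedback_control(t, x):
            """u*(t, x) = -(1/beta) B^T grad_x V(t, x): the Hamiltonian minimizer."""
            x = x.detach().requires_grad_(True)
            tx = torch.cat([t.expand(x.shape[0], 1), x], dim=-1)
            (grad_x,) = torch.autograd.grad(value_net(tx).sum(), x)
            return -(1.0 / beta) * grad_x @ B     # one control per state sample

        u = feedback_control(torch.tensor(0.5), torch.randn(16, dim_x))

    Because the expensive part (training the value network) happens offline, evaluating the control online costs only one forward and one backward pass through the network.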

    Efficient Neural Network Approaches for Conditional Optimal Transport with Applications in Bayesian Inference

    We present two neural network approaches that approximate the solutions of static and dynamic conditional optimal transport (COT) problems, respectively. Both approaches enable sampling and density estimation of conditional probability distributions, which are core tasks in Bayesian inference. Our methods represent the target conditional distributions as transformations of a tractable reference distribution and, therefore, fall into the framework of measure transport. COT maps are a canonical choice within this framework, with desirable properties such as uniqueness and monotonicity. However, the associated COT problems are computationally challenging, even in moderate dimensions. To improve scalability, our numerical algorithms leverage neural networks to parameterize COT maps, exploiting the structure of the static and dynamic formulations of the COT problem. PCP-Map models conditional transport maps as the gradient of a partially input convex neural network (PICNN) and uses a novel numerical implementation to increase computational efficiency compared to state-of-the-art alternatives. COT-Flow models conditional transports via the flow of a regularized neural ODE; it is slower to train but offers faster sampling. We demonstrate the effectiveness and efficiency of both methods by comparing them with state-of-the-art approaches on benchmark datasets and Bayesian inverse problems.
    Comment: 25 pages, 7 tables, 8 figures
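
    For intuition on the static approach, here is a minimal sketch of a partially input-convex network and of reading the conditional map off as its gradient in x; the simplified architecture below (convexity in x enforced via nonnegative weights and convex, nondecreasing activations) is a stand-in, not the PCP-Map implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PICNNLite(nn.Module):
            """Scalar potential G(x, y), convex in x for every condition y."""
            def __init__(self, dx, dy, width=64, depth=3):
                super().__init__()
                self.y_layers = nn.ModuleList(
                    [nn.Linear(dy if i == 0 else width, width) for i in range(depth)])
                self.x_layers = nn.ModuleList(
                    [nn.Linear(dx, width) for _ in range(depth)])
                self.z_layers = nn.ModuleList(
                    [nn.Linear(width, width, bias=False) for _ in range(depth - 1)])
                self.out = nn.Linear(width, 1, bias=False)

            def forward(self, x, y):
                u, z = y, None
                for i, (ylin, xlin) in enumerate(zip(self.y_layers, self.x_layers)):
                    u = F.relu(ylin(u))          # unconstrained conditioning path
                    zx = xlin(x) + u             # affine in x, shifted by the condition
                    if z is None:
                        z = F.softplus(zx)       # softplus: convex and nondecreasing
                    else:
                        w = F.softplus(self.z_layers[i - 1].weight)  # nonnegative
                        z = F.softplus(F.linear(z, w) + zx)  # keeps convexity in x
                return F.linear(z, F.softplus(self.out.weight))

        # The (monotone) conditional transport map is the x-gradient of G:
        G = PICNNLite(dx=2, dy=3)
        x = torch.randn(8, 2, requires_grad=True)
        y = torch.randn(8, 3)
        (T,) = torch.autograd.grad(G(x, y).sum(), x)  # T(x | y), shape (8, 2)

    Monotonicity of the map in x follows from convexity of the potential, which is one of the structural properties that makes COT maps a canonical choice.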